Federated learning aims to train a global model from multiple decentralized devices (i.e., clients) without exchanging their private local data. A key challenge is handling non-i.i.d. (not independent and identically distributed) data, which can induce discrepancies among the clients' local features. We introduce the hyperspherical federated learning (SphereFed) framework to address the non-i.i.d. issue by constraining the learned representations of data points to lie on a unit hypersphere shared by the clients. Specifically, all clients learn their local representations by minimizing the loss with respect to a fixed classifier whose weights span the unit hypersphere. After federated training has improved the global model, this classifier is further calibrated via a closed-form solution obtained by minimizing a mean squared loss. We show that the calibration solution can be computed efficiently without direct access to local data. Extensive experiments show that our SphereFed approach improves the accuracy of multiple existing federated learning algorithms by a considerable margin (up to 6% on challenging datasets), with enhanced computation and communication efficiency, across datasets and model architectures.
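A minimal sketch of the two ingredients described above, under assumed details (orthonormal fixed classifier, ridge term `lam`, aggregation of Gram statistics): local features projected onto the unit hypersphere are scored against a frozen classifier, and that classifier is later recalibrated in closed form by least squares.

```python
import numpy as np

def fixed_classifier(num_classes, feat_dim, seed=0):
    """Frozen classifier whose rows are orthonormal, hence on the unit hypersphere
    (assumes feat_dim >= num_classes)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((num_classes, feat_dim))
    q, _ = np.linalg.qr(w.T)            # orthonormal columns
    return q.T                          # rows have unit norm

def local_logits(features, W):
    """Clients score hypersphere-projected features against the *fixed* classifier W."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)  # project onto unit sphere
    return z @ W.T                       # cosine-similarity logits

def calibrate_classifier(F, Y, lam=1e-3):
    """Closed-form least-squares calibration: argmin_W ||F W^T - Y||^2 + lam ||W||^2.
    Only the statistics F^T F and F^T Y are needed, which (as an assumption about the
    aggregation) can be summed across clients without sharing raw data."""
    d = F.shape[1]
    W_T = np.linalg.solve(F.T @ F + lam * np.eye(d), F.T @ Y)
    return W_T.T
```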
We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDF renderers. Recent physics-based differentiable rendering techniques for meshes use edge sampling to handle discontinuities, especially at object silhouettes, but SDFs do not have a simple parametric form amenable to sampling. Instead, our method builds on area-sampling techniques and develops a continuous warping function for SDFs to account for these discontinuities. Our method leverages the distance to the surface encoded in the SDF and uses quadrature on sphere tracer points to compute this warping function. We further show that subsampling these points makes the method tractable for neural SDFs. Our differentiable renderer can be used to optimize neural shapes from multi-view images, and it produces 3D reconstructions comparable to recent SDF-based inverse rendering methods, without requiring 2D segmentation masks to guide the geometry optimization and without volumetric approximations of the geometry.
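The warping function itself is specific to the paper, but the sphere-tracer points it is integrated over can be illustrated with a standard sphere tracer. The sketch below (assumed step tolerance and iteration budget) marches a ray through an SDF and records the visited points, which is the kind of point set the quadrature described above would operate on.

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4, t_max=10.0):
    """March a ray origin + t * direction through a signed distance field.
    Returns the hit point (or None) and the points visited along the ray;
    each step length equals the SDF value, the defining property of sphere tracing."""
    direction = direction / np.linalg.norm(direction)
    t, points = 0.0, []
    for _ in range(max_steps):
        p = origin + t * direction
        points.append(p)
        d = sdf(p)
        if abs(d) < eps:          # converged onto the surface
            return p, points
        t += d                    # safe step: no surface closer than d
        if t > t_max:
            break
    return None, points

# Example SDF: a unit sphere centred at the origin.
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
hit, pts = sphere_trace(unit_sphere, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```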
We present a method for editing complex indoor lighting from a single image, given its depth and light source segmentation masks. This is an extremely challenging problem that requires modeling complex light transport and disentangling HDR lighting from material and geometry using only a partial LDR observation of the scene. We tackle this problem with two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions. We use a physically-based representation of indoor light that allows intuitive editing and infers both visible and invisible light sources. Our neural rendering framework combines physically-based direct illumination and shadow rendering with a deep network that approximates global illumination. It can capture challenging lighting effects such as soft shadows, directional lighting, specular materials, and reflections. Previous single-image inverse rendering methods usually entangle scene lighting and geometry and support only applications such as object insertion. Instead, by combining parametric 3D lighting estimation with neural scene rendering, we demonstrate the first automatic method for full scene relighting from a single image, including light source insertion, removal, and replacement. All source code and data will be publicly released.
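A heavily simplified sketch of the compositing idea described above, with assumed inputs and network shape: a physically-based direct/shadow term is combined with a learned residual standing in for global illumination. The framework's actual inputs, losses, and light parameterization are not reproduced here.

```python
import torch
import torch.nn as nn

class NeuralRelighter(nn.Module):
    """Toy combination of physically based direct lighting with a learned GI term.
    `buffers` stands in for per-pixel features (normals, albedo, depth, ...)."""
    def __init__(self, buffer_channels=8, hidden=64):
        super().__init__()
        self.gi_net = nn.Sequential(              # deep-network stand-in for global illumination
            nn.Conv2d(buffer_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 3, 3, padding=1),
        )

    def forward(self, direct_rgb, shadow_mask, buffers):
        direct = direct_rgb * shadow_mask          # physically based direct term with shadows
        return direct + self.gi_net(buffers)       # add learned indirect contribution
```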
Machine learning algorithms typically assume that training and test examples are drawn from the same distribution. However, distribution shift is a common problem in real-world applications and can cause models to perform drastically worse at test time. In this paper, we specifically consider the problems of domain shift and subpopulation shift (e.g., imbalanced data). While prior work typically seeks to explicitly regularize the model's internal representations and predictors to be domain invariant, we instead aim to regularize the whole function without restricting the model's internal representations. This leads to a simple mixup-based technique that learns invariant functions via selective augmentation, named LISA. LISA selectively interpolates samples that share the same label but come from different domains, or that share the same domain but have different labels. We analyze a linear setting and show theoretically how LISA leads to a smaller worst-group error. Empirically, we study the effectiveness of LISA on nine benchmarks ranging from subpopulation shifts to domain shifts, and we find that LISA consistently outperforms other state-of-the-art methods.
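A minimal sketch of the selective interpolation rule described above, with assumed array inputs and a Beta(alpha, alpha) mixing coefficient: pairs either share a label but come from different domains, or share a domain but differ in label.

```python
import numpy as np

def lisa_pair(X, y, domain, rng, intra_label=True, alpha=2.0):
    """Pick one sample and a partner according to LISA's selective rule, then mix.
    Returns the mixed input plus (y_i, y_j, lam), to be used as in standard mixup:
    loss = lam * CE(pred, y_i) + (1 - lam) * CE(pred, y_j)."""
    i = rng.integers(len(X))
    if intra_label:      # same label, different domain
        mask = (y == y[i]) & (domain != domain[i])
    else:                # same domain, different label
        mask = (domain == domain[i]) & (y != y[i])
    candidates = np.flatnonzero(mask)
    j = rng.choice(candidates) if len(candidates) else i   # fall back to no mixing
    lam = rng.beta(alpha, alpha)
    x_mix = lam * X[i] + (1 - lam) * X[j]
    return x_mix, y[i], y[j], lam
```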
Data augmentation is an important component in evaluating the robustness of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, data cards, and robustness analysis results are publicly available in the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
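To illustrate the transformation/filter split described above, here is a self-contained sketch; the class names and method signatures are hypothetical and do not reproduce NL-Augmenter's actual base classes.

```python
import random

class CharacterNoiseTransformation:
    """Transformation: modifies the input text (here, random character substitutions)."""
    def __init__(self, prob=0.05, seed=0):
        self.prob, self.rng = prob, random.Random(seed)

    def generate(self, sentence: str) -> list:
        chars = [c if self.rng.random() > self.prob
                 else self.rng.choice("abcdefghijklmnopqrstuvwxyz")
                 for c in sentence]
        return ["".join(chars)]

class MaxLengthFilter:
    """Filter: splits the data by keeping only examples that satisfy a property."""
    def __init__(self, max_tokens=20):
        self.max_tokens = max_tokens

    def filter(self, sentence: str) -> bool:
        return len(sentence.split()) <= self.max_tokens
```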
Recent advances in neural radiance fields have enabled the high-fidelity 3D reconstruction of complex scenes for novel view synthesis. However, it remains underexplored how the appearance of such representations can be efficiently edited while maintaining photorealism. In this work, we present PaletteNeRF, a novel method for photorealistic appearance editing of neural radiance fields (NeRF) based on 3D color decomposition. Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases (i.e., 3D segmentations defined by a group of NeRF-type functions) that are shared across the scene. While our palette-based bases are view-independent, we also predict a view-dependent function to capture the color residual (e.g., specular shading). During training, we jointly optimize the basis functions and the color palettes, and we also introduce novel regularizers to encourage the spatial coherence of the decomposition. Our method allows users to efficiently edit the appearance of the 3D scene by modifying the color palettes. We also extend our framework with compressed semantic features for semantic-aware appearance editing. We demonstrate that our technique is superior to baseline methods both quantitatively and qualitatively for appearance editing of complex real-world scenes.
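A minimal sketch of the per-point color decomposition described above, with assumed tensor shapes: blending weights over a shared set of palette colors plus a view-dependent residual. The underlying NeRF-style networks and the regularizers are not shown.

```python
import torch

def palette_color(weights, palette, view_residual):
    """Compose the color of sampled 3D points from palette bases.

    weights:        (N, P) non-negative blending weights per point (view-independent)
    palette:        (P, 3) global palette colors shared across the scene
    view_residual:  (N, 3) view-dependent residual (e.g., specular shading)
    """
    diffuse = weights @ palette              # linear combination of palette bases
    return torch.clamp(diffuse + view_residual, 0.0, 1.0)

# Editing amounts to changing rows of `palette`; the learned decomposition stays fixed.
```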
The rapid growth of machine translation (MT) systems has necessitated comprehensive studies to meta-evaluate evaluation metrics being used, which enables a better selection of metrics that best reflect MT quality. Unfortunately, most of the research focuses on high-resource languages, mainly English, the observations for which may not always apply to other languages. Indian languages, having over a billion speakers, are linguistically different from English, and to date, there has not been a systematic study of evaluating MT systems from English into Indian languages. In this paper, we fill this gap by creating an MQM dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems, and use it to establish correlations between annotator scores and scores obtained using existing automatic metrics. Our results show that pre-trained metrics, such as COMET, have the highest correlations with annotator scores. Additionally, we find that the metrics do not adequately capture fluency-based errors in Indian languages, and there is a need to develop metrics focused on Indian languages. We hope that our dataset and analysis will help promote further research in this area.
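Establishing correlations between annotator scores and automatic metric scores, as described above, can be sketched with standard correlation statistics from SciPy; the score arrays below are placeholders, not data from the paper.

```python
import numpy as np
from scipy.stats import pearsonr, kendalltau

# Placeholder scores; in practice these are per-segment MQM annotator scores and metric scores.
human  = np.array([0.9, 0.4, 0.7, 0.2, 0.8])
metric = np.array([0.85, 0.5, 0.65, 0.3, 0.75])

r, _   = pearsonr(human, metric)      # linear agreement
tau, _ = kendalltau(human, metric)    # rank agreement, common in MT meta-evaluation
print(f"Pearson r = {r:.3f}, Kendall tau = {tau:.3f}")
```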
Flooding is one of the most disastrous natural hazards, responsible for substantial economic losses. A predictive model for flood-induced financial damages is useful for many applications such as climate change adaptation planning and insurance underwriting. This research assesses the predictive capability of regressors constructed on the National Flood Insurance Program (NFIP) dataset using neural networks (Conditional Generative Adversarial Networks), decision trees (Extreme Gradient Boosting), and kernel-based regressors (Gaussian Process). The assessment highlights the most informative predictors for regression. The distribution for claims amount inference is modeled with a Burr distribution permitting the introduction of a bias correction scheme and increasing the regressor's predictive capability. Aiming to study the interaction with physical variables, we incorporate Daymet rainfall estimation to NFIP as an additional predictor. A study on the coastal counties in the eight US South-West states resulted in an $R^2=0.807$. Further analysis of 11 counties with a significant number of claims in the NFIP dataset reveals that Extreme Gradient Boosting provides the best results, that bias correction significantly improves the similarity with the reference distribution, and that the rainfall predictor strengthens the regressor performance.
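One concrete piece of the pipeline above is modeling claim amounts with a Burr distribution. A minimal sketch using SciPy's Burr XII implementation is shown below; the claim values are synthetic placeholders, and the paper's specific bias-correction scheme is not reproduced, only the distribution fit it builds on.

```python
import numpy as np
from scipy import stats

# Placeholder positive claim amounts; in practice these come from the NFIP dataset.
claims = np.random.default_rng(0).lognormal(mean=9.0, sigma=1.2, size=5000)

# Fit a Burr XII distribution to the observed claims (location fixed at 0).
c, d, loc, scale = stats.burr12.fit(claims, floc=0)
fitted = stats.burr12(c, d, loc=loc, scale=scale)

# Compare empirical and fitted quantiles -- the kind of check a bias-correction
# scheme would rely on (the actual correction used in the paper is not shown).
qs = [0.5, 0.9, 0.99]
print(np.quantile(claims, qs), fitted.ppf(qs))
```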
Recent advancements in sensing and communication facilitate obtaining high-frequency real-time data from various physical systems like power networks, climate systems, biological networks, etc. However, since the data are recorded by physical sensors, it is natural that the obtained data is corrupted by measurement noise. In this paper, we present a novel algorithm for online real-time learning of dynamical systems from noisy time-series data, which employs the Robust Koopman operator framework to mitigate the effect of measurement noise. The proposed algorithm has three main advantages: a) it allows for online real-time monitoring of a dynamical system; b) it obtains a linear representation of the underlying dynamical system, thus enabling the user to use linear systems theory for analysis and control of the system; c) it is computationally fast and less intensive than the popular Extended Dynamic Mode Decomposition (EDMD) algorithm. We illustrate the efficiency of the proposed algorithm by applying it to identify the Van der Pol oscillator, the IEEE 68 bus system, and a ring network of Van der Pol oscillators.
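For context, the linear-representation idea referenced above can be sketched with plain batch EDMD: lift snapshot pairs through a dictionary and solve a least-squares problem for the Koopman matrix. The paper's contribution, the online Robust Koopman formulation that mitigates measurement noise, is not reproduced here; the dictionary below is an illustrative choice.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Standard EDMD sketch: K = argmin_K || Psi(Y) - K Psi(X) ||_F.
    X, Y are snapshot matrices whose columns are x_k and x_{k+1}."""
    PsiX = dictionary(X)                 # lifted states, shape (n_features, n_snapshots)
    PsiY = dictionary(Y)
    K = PsiY @ np.linalg.pinv(PsiX)      # least-squares Koopman approximation
    return K

# Example dictionary for a 2-state system (e.g., a Van der Pol oscillator): monomials up to degree 2.
def monomials(X):
    x1, x2 = X
    return np.vstack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
```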
Knowledge about outcomes is critical for complex event understanding but is hard to acquire. We show that by pre-identifying a participant in a complex event, crowd workers are able to (1) infer the collective impact of salient events that make up the situation, (2) annotate the volitional engagement of participants in causing the situation, and (3) ground the outcome of the situation in state changes of the participants. By creating a multi-step interface and a careful quality control strategy, we collect a high quality annotated dataset of 8K short newswire narratives and ROCStories with high inter-annotator agreement (0.74-0.96 weighted Fleiss Kappa). Our dataset, POQue (Participant Outcome Questions), enables the exploration and development of models that address multiple aspects of semantic understanding. Experimentally, we show that current language models lag behind human performance in subtle ways through our task formulations that target abstract and specific comprehension of a complex event, its outcome, and a participant's influence over the event culmination.